AWS announces SageMaker Clarify to help reduce bias in machine learning models – TechCrunch
As companies rely increasingly on machine learning models to run their businesses, it's imperative to include anti-bias measures to ensure these models are not making false or misleading assumptions. Today at AWS re:Invent, AWS introduced Amazon SageMaker Clarify to help reduce bias in machine learning models. "We are launching Amazon SageMaker Clarify. And what that does is it allows you to have insight into your data and models throughout your machine learning lifecycle," Bratin Saha, Amazon VP and general manager of machine learning, told TechCrunch. He says it is designed to analyze the data for bias before you start data prep, so you can find these kinds of problems before you even start building your model.
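The announcement itself doesn't include code, but a rough sketch of the pre-training bias check Saha describes, using the clarify module of the SageMaker Python SDK, might look like the following. The S3 paths, IAM role, column names, and facet values here are placeholders for illustration, not details from AWS.

```python
# Minimal sketch of a pre-training bias analysis with the SageMaker Python SDK.
# All paths, the IAM role, and the column/facet names are placeholder assumptions.
from sagemaker import Session, clarify

session = Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # placeholder role

processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Describe where the training data lives and which column holds the label.
data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/train/train.csv",    # placeholder path
    s3_output_path="s3://my-bucket/clarify/bias-report",    # placeholder path
    label="approved",                                        # placeholder label column
    headers=["approved", "age", "income", "gender"],         # placeholder headers
    dataset_type="text/csv",
)

# Declare the sensitive attribute ("facet") to check and which label value counts as favorable.
bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],
    facet_name="gender",
    facet_values_or_threshold=[0],
)

# Run bias metrics on the raw data, before any model is trained --
# e.g. class imbalance (CI) and difference in proportions of labels (DPL).
processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods=["CI", "DPL"],
)
```

Clarify writes the resulting bias report to the configured S3 output path, which is the "insight into your data" step Saha describes happening before model building.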
A Simple Tactic That Could Help Reduce Bias in AI
That's an emerging conclusion of research-based findings -- including my own -- that could lead to AI-enabled decision-making systems being less subject to bias and better able to promote equality. This is a critical possibility, given our growing reliance on AI-based systems to render evaluations and decisions in high-stakes human contexts, from court decisions to hiring to access to credit, and more. It's been well established that AI-driven systems are subject to the biases of their human creators -- we unwittingly "bake" biases into systems by training them on biased data or with "rules" created by experts with implicit biases. Consider the Allegheny Family Screening Tool (AFST), an AI-based system that predicts the likelihood that a child is in an abusive situation, using data from Allegheny County, Pennsylvania's Department of Human Services -- including records from public agencies related to child welfare, drug and alcohol services, housing, and others. Caseworkers use reports of potential abuse from the community, along with whatever publicly available data they can find for the family involved, to run the model, which produces a risk score from 1 to 20; a sufficiently high score triggers an investigation.
IBM builds a more diverse million-face data set to help reduce bias in AI
Encoding biases into machine learning models, and in general into the constructs we refer to as AI, is nearly inescapable -- but we can sure do better than we have in past years. IBM is hoping that a new database of a million faces more reflective of those in the real world will help. Facial recognition is being relied on for everything from unlocking your phone to your front door, and is being used to estimate your mood or likelihood to commit criminal acts -- and we may as well admit many of these applications are bunk. But even the good ones often fail simple tests like working adequately with people of certain skin tones or ages. This is a multi-layered problem, and of course a major part of it is that many developers and creators of these systems fail to think about, let alone audit for, a failure of representation in their data.
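The representation audit the passage alludes to is left abstract; as a hedged illustration, a first-pass check over a dataset's metadata might simply tabulate how each demographic attribute is distributed. The file name, column names, and 5% floor below are assumptions made for the sake of the example, not details of IBM's Diversity in Faces release.

```python
# Hypothetical representation audit over an image dataset's metadata table.
# The CSV name, attribute columns, and 5% floor are illustrative assumptions.
import pandas as pd

metadata = pd.read_csv("faces_metadata.csv")  # hypothetical per-image metadata

for column in ["skin_tone", "age_group", "gender"]:  # hypothetical attribute columns
    shares = metadata[column].value_counts(normalize=True).sort_values()
    print(f"\n{column} representation:")
    print(shares.to_string())
    # Flag any group falling below an arbitrary 5% floor for manual review.
    underrepresented = shares[shares < 0.05]
    if not underrepresented.empty:
        print(f"  under-represented groups: {list(underrepresented.index)}")
```

A check this simple obviously doesn't make a dataset fair, but it is the kind of audit-for-representation step the passage says many developers skip entirely.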
We need to shine more light on algorithms so they can help reduce bias, not perpetuate it
It was a striking story. "Machine Bias," the headline read, and the teaser proclaimed: "There's software used across the country to predict future criminals. And it's biased against blacks." ProPublica, a Pulitzer Prize–winning nonprofit news organization, had analyzed risk assessment software known as COMPAS, which is used to forecast which criminals are most likely to reoffend.
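ProPublica's analysis turned on how the software's errors fall across racial groups. As a simplified, hypothetical illustration (the column names and score cutoff below are invented, not ProPublica's actual methodology), such a check might compare false positive rates between groups.

```python
# Simplified sketch of a disparity check: compare false positive rates by group.
# File name, column names, and the risk-score cutoff are hypothetical.
import pandas as pd

scores = pd.read_csv("risk_scores.csv")  # hypothetical: one row per defendant
scores["predicted_high_risk"] = scores["risk_score"] >= 7  # hypothetical cutoff

for group, rows in scores.groupby("race"):
    did_not_reoffend = rows[rows["reoffended"] == 0]
    false_positive_rate = did_not_reoffend["predicted_high_risk"].mean()
    print(f"{group}: false positive rate = {false_positive_rate:.2f}")
```

When the rate of people wrongly flagged as high risk differs sharply between groups, that is the kind of bias the "Machine Bias" story described -- and the kind that more transparency around algorithms would make easier to detect.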